This document on the RSA cryptosystem discusses:
1) The RSA cryptosystem uses a public and private key to encrypt and decrypt messages, with security based on the difficulty of factoring the product of two large prime numbers.
2) An example is provided where a message is encrypted with a public key and decrypted with the corresponding private key.
3) The security of RSA relies on the difficulty of factoring large numbers: the best known classical factoring algorithms still take super-polynomial time in the number of bits.
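The key generation and modular-exponentiation steps summarized above can be sketched in a few lines of Python. This uses textbook-sized primes for illustration only; real keys use primes hundreds of digits long, and real implementations add padding.

```python
# Toy RSA sketch (illustration only; not secure).
p, q = 61, 53
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent via modular inverse (Python 3.8+): 2753

def encrypt(m): return pow(m, e, n)   # c = m^e mod n
def decrypt(c): return pow(c, d, n)   # m = c^d mod n

m = 65
c = encrypt(m)        # -> 2790
assert decrypt(c) == m
```

Recovering d from the public key (e, n) alone would require factoring n, which is what the scheme's security rests on.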
Ruby Supercomputing - Using The GPU For Massive Performance Speedup v1.1
For MountainWest RubyConf 2011 in Salt Lake City, Utah. By Preston Lee.
Twitter: @prestonism
Blog: http://prestonlee.com
Code: https://github.com/preston/ruby-gpu-examples
Slides: http://www.slideshare.net/preston.lee/
GDC 2011: DirectX 11 Rendering in Battlefield 3, by drandom
The document discusses rendering techniques used in the Frostbite 2 game engine for Battlefield 3, including deferred shading with tile-based lighting computed using compute shaders. It describes how this approach reduces overdraw and bandwidth compared to traditional deferred rendering. It also discusses techniques for displacement mapping terrains, adaptive multi-sample anti-aliasing, and direct stereo 3D rendering support.
The cutoff frequency is defined as the frequency where the output/input ratio has a magnitude of 0.707, or -3 dB. [1] It characterizes filtering devices like RC circuits. [2] Below the cutoff frequency signals are relatively unaffected, but above it there is much more attenuation. [3] For the example RC circuit with a 0.01 s time constant, the calculated cutoff frequency is 100 rad/s, which matches what is shown in the circuit's Bode diagram.
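The relationship between the time constant and the cutoff frequency described above can be checked numerically; this sketch assumes the standard first-order RC low-pass magnitude response:

```python
import math

tau = 0.01                 # RC time constant from the example, seconds
wc = 1.0 / tau             # cutoff frequency: 100 rad/s

def gain(w):
    # First-order low-pass magnitude: |H(jw)| = 1 / sqrt(1 + (w * tau)^2)
    return 1.0 / math.sqrt(1.0 + (w * tau) ** 2)

assert abs(gain(wc) - 0.707) < 1e-3          # magnitude 0.707 at cutoff
assert 20 * math.log10(gain(wc)) > -3.02     # i.e. about -3 dB
```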
RetinaNet proposes a focal loss function to address the class imbalance issue in one-stage object detectors by down-weighting well-classified examples. It introduces a single, unified network with a backbone and two task-specific subnets: one for bounding box regression and one for classification. The backbone computes convolutional feature maps across scales, which are fused using a feature pyramid network to better detect objects at multiple scales. This discussion leads to exploring automatically designing the FPN architecture through neural architecture search.
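The down-weighting of well-classified examples can be made concrete with the binary focal loss formula FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t); a minimal sketch, using the paper's default hyperparameters alpha = 0.25, gamma = 2:

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss. p is the predicted foreground probability, y the 0/1 label.
    The (1 - p_t)^gamma factor shrinks the loss of well-classified examples."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

easy = focal_loss(0.99, 1)   # confidently correct -> almost no loss
hard = focal_loss(0.10, 1)   # badly misclassified -> dominates training
assert hard > 1000 * easy
```

With gamma = 0 this reduces to (alpha-weighted) cross-entropy, which is the comparison the paper draws.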
The document describes using threshold-based agent models to optimize plant placement in a landscape. It proposes an agent-based algorithm where individual "plants" search the landscape for optimal locations based on their light and water requirements. A genetic algorithm approach is also mentioned. The goal is to maximize overall plant growth by finding placements where each plant meets a 70% threshold of its ideal growth conditions. Future work could include formal analysis and comparisons to determine how well the approach works at finding the optimal plant collection for a given landscape.
This document summarizes the uses of the Christoffel-Darboux (CD) kernel in the spectral theory of orthogonal polynomials. The CD kernel is defined in terms of orthogonal polynomials and can be interpreted as the integral kernel of a projection operator. It has applications in analyzing the zeros of orthogonal polynomials, Gaussian quadrature, variational principles, and characterizing the absolutely continuous, singular continuous, and pure point spectra of measures. Recent work has expanded its uses in studying universality in the bulk of the spectrum and properties of orthogonal polynomials.
The document summarizes the SSD object detection model. SSD is a single-shot detector that performs object detection by predicting bounding boxes and class probabilities from multiple feature maps extracted from a base network. SSD improves speed over two-stage detectors like Faster R-CNN by performing detection in one stage without region proposals. It achieves this by using default bounding boxes of different scales and aspect ratios on multiple feature maps to detect objects. The document explains SSD's model architecture, training procedure, and experimental results, showing that SSD achieves real-time speeds while maintaining accuracy compared to other detectors.
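During training, SSD matches default boxes to ground truth by jaccard overlap (IoU), keeping matches above a 0.5 threshold. A minimal IoU sketch for axis-aligned (x1, y1, x2, y2) boxes:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))   # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))   # overlap height
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

assert iou((0, 0, 2, 2), (0, 0, 2, 2)) == 1.0
assert iou((0, 0, 2, 2), (1, 1, 3, 3)) == 1.0 / 7.0
assert iou((0, 0, 1, 1), (2, 2, 3, 3)) == 0.0
```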
Chapter 14 discusses finite impulse response (FIR) filters. It introduces FIR filters and their properties, including that they can guarantee linear phase characteristics. It covers calculating filter coefficients using methods like the window method. It also discusses realizing FIR filters using direct form structures and designing FIR filters, which involves specifying the filter, calculating coefficients, selecting a structure, and implementing the filter.
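The window method mentioned above can be sketched as follows: truncate the ideal sinc impulse response and shape it with a window. The Hamming window and the cutoff expressed as a fraction of the sampling rate are choices made for this sketch, not taken from the chapter:

```python
import math

def fir_lowpass(num_taps, fc):
    """Low-pass FIR coefficients via the window method.
    fc is the cutoff as a fraction of the sampling rate (0 < fc < 0.5)."""
    M = num_taps - 1
    h = []
    for n in range(num_taps):
        k = n - M / 2.0
        # Ideal (sinc) low-pass response centered on the middle tap.
        ideal = 2 * fc if k == 0 else math.sin(2 * math.pi * fc * k) / (math.pi * k)
        window = 0.54 - 0.46 * math.cos(2 * math.pi * n / M)   # Hamming window
        h.append(ideal * window)
    return h

h = fir_lowpass(21, 0.1)
# Symmetric coefficients give the guaranteed linear phase mentioned above.
assert all(abs(h[i] - h[-1 - i]) < 1e-12 for i in range(len(h)))
```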
This document discusses distributed patterns and includes code examples of consistent hashing, distributed key-value storage using consistent hashing, and a pub-sub messaging pattern using ZeroMQ. It is a blog post by Eric Redmond with code snippets in Ruby demonstrating various distributed system patterns that programmers should know.
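The original post's snippets are in Ruby; as a hedged sketch of the same consistent-hashing pattern in Python (the node names and virtual-node count here are illustrative, not from the post):

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent hashing: nodes are hashed onto a ring, and a key
    is owned by the first node encountered clockwise from the key's hash."""

    def __init__(self, nodes=(), vnodes=8):
        self.ring = []                        # sorted (hash, node) pairs
        for node in nodes:
            self.add(node, vnodes)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node, vnodes=8):
        for i in range(vnodes):               # virtual nodes even out the load
            bisect.insort(self.ring, (self._hash(f"{node}:{i}"), node))

    def node_for(self, key):
        h = self._hash(key)
        i = bisect.bisect(self.ring, (h, ""))
        return self.ring[i % len(self.ring)][1]   # wrap around the ring

ring = HashRing(["node-a", "node-b", "node-c"])
owner = ring.node_for("some-key")
assert owner in {"node-a", "node-b", "node-c"}
```

Adding or removing one node remaps only the keys adjacent to its ring positions, which is what makes this pattern attractive for distributed key-value storage.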
The document discusses data structures and provides details about various data structures like arrays, linked lists, stacks, queues, trees and graphs. It explains the concepts of abstract data types, linear and non-linear data structures. Key details about arrays, linked lists and their limitations are provided. Implementation of singly linked lists using C language is demonstrated through functions like creation, insertion, deletion and traversal of nodes.
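The document implements singly linked lists in C; an equivalent sketch of the same creation, insertion, deletion, and traversal operations in Python:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class SinglyLinkedList:
    def __init__(self):
        self.head = None

    def insert_front(self, data):
        node = Node(data)
        node.next = self.head
        self.head = node

    def delete(self, data):
        prev, cur = None, self.head
        while cur and cur.data != data:
            prev, cur = cur, cur.next
        if cur:                          # unlink the matched node, if any
            if prev:
                prev.next = cur.next
            else:
                self.head = cur.next

    def traverse(self):
        cur, out = self.head, []
        while cur:
            out.append(cur.data)
            cur = cur.next
        return out

lst = SinglyLinkedList()
for x in (3, 2, 1):
    lst.insert_front(x)
assert lst.traverse() == [1, 2, 3]
lst.delete(2)
assert lst.traverse() == [1, 3]
```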
The document discusses different approaches tried to switch device certificates programmatically on iOS, including using NSURLConnection, GTMHTTPFetcher, and ASIHTTPRequest. It also describes some hacks that worked like using the IP address instead of the hostname and adding a period to the hostname. The document appears to be technical notes on trying to authenticate and switch certificates during an HTTPS connection on iOS.
The document discusses algorithms complexity and data structures efficiency. It covers topics like asymptotic notation to analyze algorithm complexity, different time and memory complexities like constant, logarithmic, linear, quadratic, and exponential. Examples are provided to illustrate complexity calculations for algorithms with operations on arrays, matrices and recursion. Choosing the right data structures depends on the algorithm complexity and problem requirements.
- Data structures allow for efficient processing and organization of large volumes of data. They define logical relationships between related data elements and how operations like storage, retrieval, and access are carried out on the data.
- Common data structures include arrays, lists, stacks, queues, trees, graphs, dictionaries, maps, hash tables, sets, and lattices. Linear data structures like arrays and lists store elements in a linear order, while non-linear structures like trees and graphs have more complex relationships between elements.
- Linked lists are a data structure where each element contains a link to the next element, rather than using contiguous memory locations. This allows for more flexible insertion and deletion than arrays. Doubly linked lists add a link to the previous element as well, allowing traversal in both directions.
The document provides a summary of XPath and XSLT functions and operators. It lists XPath axes, node tests, and location paths. It also outlines XSLT elements for templates, variables, parameters, output, and other transformations. Key functions covered include node-set, last, position, count, id, local-name, namespace-uri, name, string, concat, and boolean.
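A few of the location paths and predicates listed above can be tried with Python's standard library, bearing in mind that `xml.etree.ElementTree` implements only a small subset of XPath 1.0 (full XPath functions and XSLT would need an external processor). The sample tree here is made up for illustration:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<books>
  <book id="b1"><title>XPath Basics</title></book>
  <book id="b2"><title>XSLT Templates</title></book>
</books>
""")

titles = [t.text for t in doc.findall(".//title")]       # descendant axis
assert titles == ["XPath Basics", "XSLT Templates"]
assert doc.find("book[@id='b2']/title").text == "XSLT Templates"   # attribute predicate
assert doc.find("book[last()]").get("id") == "b2"        # position predicate
```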
Presentation at the ATLAS Calorimetry Calibration Workshop, "Clustering of very lo..." by CARMEN IGLESIAS
The document summarizes studies on clustering very low energy particles using the ATLAS calorimeter. It discusses using topoclusters with different seed and neighbor cell energy thresholds to better reconstruct particles below 10 GeV. Preliminary conclusions found that a seed threshold of 4 and a neighbor threshold of 2 provided the best energy resolution and efficiency for pions, photons, and neutrons compared to other clustering algorithms. Further studies examined the impact of overlapping nearby particles on cluster reconstruction and found that the new splitter algorithm in release 8.2.0 did not significantly improve resolution over no splitting, whether the particles were separated by more or less than 0.1 in deltaR.
The document describes an algorithm called Extreme DXT Compression for compressing textures into DXT1 and DXT5 formats. It uses SSE2 and SSSE3 instructions for high performance and produces quality comparable to the Real-Time DXT Compression algorithm but with roughly 300% better performance. The algorithm tightly packs data, processes two 4x4 blocks at once, and minimizes comparisons, jumps and loops to optimize for processors like the Core 2 Duo.
This document summarizes random number generation using OpenCL. It discusses the Marsaglia polar method for generating random numbers and Gaussian pairs. It presents pseudocode for the Gaussian pair generation algorithm. Profiling results show that 54% of time is spent generating Gaussian pairs while 46% is for random numbers. The document also discusses optimization techniques like using local memory, coalesced global memory access, and choosing an optimal work group size. Performance results show near linear speedup from 1 to 8 GPUs.
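The document targets OpenCL kernels; the Marsaglia polar method itself can be sketched in plain Python (rejection-sample a point in the unit disc, then transform it into two independent standard normals):

```python
import math
import random

def gaussian_pair(rng=random):
    """Marsaglia polar method: returns two independent N(0, 1) samples."""
    while True:
        u = 2.0 * rng.random() - 1.0
        v = 2.0 * rng.random() - 1.0
        s = u * u + v * v
        if 0.0 < s < 1.0:                  # accept points strictly inside the disc
            factor = math.sqrt(-2.0 * math.log(s) / s)
            return u * factor, v * factor

random.seed(42)
samples = [x for _ in range(5000) for x in gaussian_pair()]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
assert abs(mean) < 0.05 and abs(var - 1.0) < 0.1   # close to N(0, 1)
```

The rejection loop is what makes a branch-free GPU port nontrivial, which is one reason work-group sizing and memory access patterns matter in the document's profiling.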
If you've tried Apache Solr 1.4, you've probably had a chance to take it for a spin indexing and searching your data, and getting acquainted with its powerful, versatile new features and functions. Now, it's time to roll up your sleeves and really master what Solr 1.4 has to offer.
The document discusses digital image processing and two-dimensional transforms. It provides an agenda that covers two-dimensional mathematical preliminaries and two transforms: the discrete Fourier transform (DFT) and discrete cosine transform (DCT). It then discusses the DFT and DCT in more detail over several pages, covering properties, examples, and applications such as image compression.
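The DCT's usefulness for compression comes from energy compaction: smooth signals concentrate into a few low-frequency coefficients. A sketch of the unnormalized 1-D DCT-II (a 2-D DCT applies this along rows, then columns):

```python
import math

def dct(x):
    """Unnormalized DCT-II: X[k] = sum_n x[n] * cos(pi * k * (n + 0.5) / N)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (n + 0.5) / N) for n in range(N))
            for k in range(N)]

# A constant (perfectly smooth) block collapses onto the DC term alone.
X = dct([5.0] * 8)
assert abs(X[0] - 40.0) < 1e-9
assert all(abs(c) < 1e-9 for c in X[1:])
```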
1) This document provides biographical and career information about Cheng-Chin Chiang, including his education, work experience at KEK and NSRRC, and publications.
2) As part of his work at KEK, Cheng-Chin helped with the BELLE experiment to study CP violation and measure properties of B meson decays using an electron-positron collider.
3) At NSRRC, Cheng-Chin has worked on simulation and analysis of beam dynamics for the TPS synchrotron, including optimizing the lattice design and estimating effects like eddy currents during ramping.
Here are the key points about how evaluation cost scales with input size:
- For many algorithms, the running time grows as a polynomial in the input size: if the input grows by a factor of k, a linear algorithm slows down by a factor of k, a quadratic one by roughly k², and so on. These are polynomial-time algorithms.
- For naive recursive algorithms like Fibonacci, the running time grows exponentially with input size: each increment of the input roughly doubles the number of recursive calls.
- Space usage also often scales with input size. Recursive algorithms use additional stack space proportional to recursion depth, which grows with input.
- Caching/memoization can improve scaling by reusing results of subproblems. But it increases space usage.
- Data structure choice matters - hash tables allow O(1) average-case access, versus O(n) for searching an unsorted list.
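The memoization point above can be demonstrated with Fibonacci, the example the notes use; `functools.lru_cache` trades cache space for the reuse of subproblem results:

```python
from functools import lru_cache

def fib_naive(n):
    # Exponential time: each call spawns two more until the base cases.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Memoization reuses subproblem results: linear time, extra cache space.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

assert fib_naive(20) == fib_memo(20) == 6765
assert fib_memo(50) == 12586269025   # far beyond what the naive version reaches quickly
```

Both versions also illustrate the stack-space point: recursion depth, and hence stack usage, grows linearly with n.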
Bioastronautics: Space Exploration and its Effects on the Human Body Course S... by Jim Jenkins
This three-day course is intended for technical and managerial personnel who wish to be introduced to the effects of the space environment on humans. This course introduces bioastronautics from a fundamental perspective, assuming no prior knowledge of biology, physiology, or chemistry. The objective of the course is to provide the student with basic knowledge that will allow him or her to contribute more effectively to the human space exploration program. The human body, which through evolution is uniquely adapted to function on Earth, adapts to the space environment characterized by weightlessness and elevated radiation. These alterations can impact the health and performance of astronauts, especially on return to Earth.
Fundamentals Of Space Systems & Space Subsystems course sampler, by Jim Jenkins
This course in space systems and space subsystems is for technical and management personnel who wish to gain an understanding of the important technical concepts in the development of space instrumentation, subsystems, and systems. The goal is to assist students to achieve their professional potential by endowing them with an understanding of the subsystems and supporting disciplines important to developing space instrumentation, space subsystems, and space systems. It is designed for participants who expect to plan, design, build, integrate, test, launch, operate or manage subsystems, space systems, launch vehicles, spacecraft, payloads, or ground systems. The objective is to expose each participant to the fundamentals of each subsystem and their inter-relations — not necessarily to make each student a systems engineer, but to give aerospace engineers and managers a technically based space systems perspective. The fundamental concepts are introduced and illustrated by state-of-the-art examples. This course differs from the typical space systems course in that the technical aspects of each important subsystem are addressed.
ATI's Quantitative Methods course: Bridging Project Management and System Eng... by Jim Jenkins
This 3-day course is designed for the professional program manager, system engineer, or project manager engaged in technically challenging projects where close technical collaboration between engineering and management is a must. To that end, this course addresses major topics that bridge the disciplines of project management and system engineering. Each of the selected topics is presented from the perspective of quantitative methods. Students first learn a theory or narrative, and then related methods or practices. Ideas are demonstrated that are immediately applicable to programs and projects. Attendees receive a copy of the instructor’s text, Quantitative Methods in Project Management.
This document provides an introduction to digital television. It discusses analog TV standards and the conversion to digital with ITU-R BT.601 and BT.709 standards defining digital video formats. It also describes MPEG transport streams, the DVB system for content delivery over satellite, cable and terrestrial networks, and conditional access systems. Packetized elementary streams (PES) and program specific information (PSI) tables are also introduced.
This document provides an overview of satellite communications fundamentals. It discusses how satellites provide capabilities not available through landlines, such as mobility and quick implementation. However, satellites are not always the most cost effective solution due to limited frequency spectrum and spatial capacity. The document describes different types of satellite services and configurations, including geostationary and non-geostationary satellites. It also covers topics like frequency reuse, earth station antennas, and satellite link delays.
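The satellite link delay mentioned above follows directly from geometry; a quick back-of-the-envelope sketch for the geostationary case (the altitude figure is the standard GEO value, not taken from the document):

```python
C = 299_792_458.0        # speed of light in vacuum, m/s
GEO_ALT = 35_786_000.0   # geostationary altitude above the equator, m

# One hop (earth station -> satellite -> earth station) travels at least
# twice the GEO altitude; slant paths to off-nadir stations are somewhat longer.
hop_delay = 2 * GEO_ALT / C
assert 0.23 < hop_delay < 0.25   # roughly a quarter second per hop
```

This quarter-second-per-hop floor (half a second for a request and reply) is physical, which is one reason non-geostationary constellations at lower altitudes are attractive for latency-sensitive services.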
This document provides information about an upcoming course on space power systems hosted by the Applied Technology Institute. The 5-day course will cover topics such as orbital mechanics, spacecraft propulsion, flight mechanics, attitude determination and control, structural design, and space power systems. It will be taught by experts in the field and provide attendees with a complete set of course notes and the textbook "Space Systems".
Fundamentals of Engineering Probability Visualization Techniques & MatLab Cas... by Jim Jenkins
This four-day course gives a solid practical and intuitive understanding of the fundamental concepts of discrete and continuous probability. It emphasizes visual aspects by using many graphical tools such as Venn diagrams, descriptive tables, trees, and a unique 3-dimensional plot to illustrate the behavior of probability densities under coordinate transformations. Many relevant engineering applications are used to crystallize crucial probability concepts that commonly arise in aerospace CONOPS and tradeoffs.
The document discusses quantization in analog-to-digital conversion. It describes the three processes of A/D conversion as sampling, quantization, and binary encoding. Quantization involves mapping amplitude values into a set of discrete values using a quantization interval or step size. The document discusses uniform quantization and how the quantization levels are determined. It also covers non-uniform quantization and provides examples and MATLAB code demonstrations of audio signal quantization.
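The document's demonstrations use MATLAB; the uniform (mid-tread) quantization step it describes can be sketched in Python as rounding to the nearest multiple of the step size (the 8-level range here is an illustrative choice):

```python
def uniform_quantize(x, step):
    """Mid-tread uniform quantizer: map x to the nearest multiple of the step size."""
    return step * round(x / step)

# 8 levels over [-1, 1) gives a step size of 2/8 = 0.25.
step = 2.0 / 8
samples = [-0.9, -0.3, 0.02, 0.6]
quantized = [uniform_quantize(s, step) for s in samples]
assert quantized == [-1.0, -0.25, 0.0, 0.5]

# Quantization error is bounded by half a step, which is what drives
# the usual 6 dB-per-bit SNR rule of thumb.
assert all(abs(q - s) <= step / 2 for q, s in zip(quantized, samples))
```

Non-uniform quantization, also covered in the document, replaces the fixed step with finer steps near zero where small amplitudes are more likely.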
Mobile data traffic is growing year over year, and mobile operators face a situation different from the legacy voice business: revenue is not growing as fast as data traffic. They need to lower the cost per Mbps to survive; otherwise they will collapse.
The document provides an overview of video coding techniques used in video compression standards. It discusses how video compression exploits both the spatial and temporal redundancy in video signals. Key techniques covered include motion-compensated prediction, where a current frame is predicted from previously coded reference frames using motion vectors, and block-based motion estimation to determine the motion vectors. The document also outlines the generic architecture of video compression systems, which apply representation, quantization, and binary encoding steps to remove redundancy from video signals.
This document provides an introduction to Reed-Solomon codes, which are word-oriented, non-binary BCH codes that are simple, robust, and perform well for burst errors. Reed-Solomon codes use Galois field techniques to encode data into blocks of length 2^m - 1 by adding 2t parity check words, allowing the correction of t errors. The encoding and decoding procedures make use of a generator polynomial, Berlekamp-Massey algorithm, Chien search, and Forney algorithm. Future work may include more flexible generator polynomials or converting C54x codes to C55x codes.
ATI Systems Engineering - The People Dimension Professional Development Techn...Jim Jenkins
This course provides perspective and insight into a part of the system engineering process that is critical to the success of any project: the people, and the leadership and management of people. It includes a short review of system engineering and its associated processes, especially the people-related aspects. It discusses the subjects of leadership and management, their differences, and how they relate to system engineering.
The course is valuable to program and line management, as well as to technical and administrative personnel who are part of the system engineering process.
ATI's Total Systems Engineering Development & Management technical training c...Jim Jenkins
This three-day ATI professional development course, Total Systems Engineering Development & Management, covers four system development fundamentals: (1) a sound engineering management infrastructure within which work may be efficiently accomplished; (2) defining the problem to be solved (requirements and specifications); (3) solving the problem (design, integration, and optimization); and (4) proving that the design solves the defined problem (verification).
ATI's Systems Engineering - Requirements technical training course samplerJim Jenkins
This ATI professional development course, Systems Engineering - Requirements, provides system engineers, team leaders, and managers with a clear understanding of how to develop good specifications affordably, using modeling methods that encourage identification of the essential characteristics that must be respected in the subsequent design process.
Euler's theorem states that for any plane graph, the number of vertices (v) minus the number of edges (e) plus the number of faces (f) equals 2. The document proves this theorem by considering a minimal tree (T) within the graph and its dual tree (D), showing that the number of edges of T and D sum to the total edges (e) of the original graph. Some applications of the theorem are that any plane graph contains an edge of degree 5 or higher and any finite set of points not all on a line contains a line with exactly two points.
This three-day course is intended for practicing systems engineers who want to learn how to apply model-driven systems engineering. Successful systems engineering requires a broad understanding of the important principles of modern spacecraft communications, and the course covers both theory and practice, with emphasis on the important system engineering principles, tradeoffs, and rules of thumb. The latest technologies are covered.
Applied Physical Oceanography And ModelingJim Jenkins
This three-day course is designed for engineers, physicists, acousticians, climate scientists, and managers who wish to enhance their understanding of this discipline or become familiar with how the ocean environment can affect their individual applications. Examples of remote sensing of the ocean, in situ ocean observing systems and actual examples from recent oceanographic cruises are given.
Students will be able to access educational Java applets to visualize waves and key acoustic phenomena.
Other web-based resources include acoustic demonstration podcasts and iPod apps to conduct acoustic measurements. The student will also be armed with Internet resources for up-to-date information on sonar systems, undersea sound propagation models, and environmental databases. The student will leave with a clear understanding of how the ocean influences undersea sound propagation and scattering.
The document discusses the MPEG-4 standard for multimedia coding and transmission. MPEG-4 allows coding of audio-visual objects rather than just pixels, supports content-based interactivity, and aims for universal access and high compression over a wide bitrate range. It describes MPEG-4 video coding including coding of video object planes using motion compensation and DCT, as well as shape coding using binary and grayscale alpha planes.
Total systems engineering_development_management_course_samplerJim Jenkins
The document provides information about a training course on total systems engineering development and management from the Applied Technology Institute (ATI). It includes an outline of the course topics covering the system engineering life cycle from requirements to management. Additionally, it provides background on the instructor, Jeff Grady, and examples of structured analysis diagrams that are used in requirements analysis and system architecture definition. The course aims to teach proven practices for applying systems engineering principles across diverse product domains.
This document discusses asymmetric key cryptography and the RSA cryptosystem. It begins by distinguishing between symmetric and asymmetric key cryptography, noting they serve complementary roles. It then covers the basics of public key cryptography using two keys: a private key and public key. The RSA cryptosystem is described as the most common public key algorithm, involving key generation, encryption with the public key, and decryption with the private key. Examples are provided to illustrate the RSA process. Potential attacks on RSA like factorization are also summarized along with recommendations to strengthen security.
The document discusses the RSA cryptosystem. It begins by explaining that RSA is an important public-key cryptosystem based on the difficulty of factoring large integers. It then provides examples of how RSA works, including choosing prime numbers p and q to generate the public and private keys, and using modular exponentiation to encrypt and decrypt messages. The document also discusses the importance of integer factorization for the security of RSA, and considerations for designing a secure RSA system, such as choosing sufficiently large prime numbers.
RSA is a widely used public-key cryptosystem. It works by generating a public and private key pair. The public key is used for encryption and digital signatures while the private key is used for decryption and signature verification. Key generation involves finding two prime numbers p and q, computing the modulus n as their product, and using these values to calculate the public and private exponents e and d respectively.
Data Mining With A Simulated Annealing Based Fuzzy Classification SystemJamie (Taka) Wang
The document presents the results of experiments comparing the proposed fuzzy classifier called SAFCS to other classification algorithms on several datasets. SAFCS achieved the highest average accuracy on both the training and test sets for most of the datasets, outperforming algorithms like C4.5, IBk, Naive Bayes, SVM, GAssist and XCS. This demonstrates the effectiveness of the SAFCS approach for constructing fuzzy classifiers and finding a set of fuzzy rules through simulated annealing optimization.
This is the DeepStochLog presentation, published at AAAI22 (Association for the Advancement of Artificial Intelligence 2022).
Authors: Thomas Winters*, Giuseppe Marra*, Robin Manhaeve, Luc De Raedt
*equal contribution
Code: https://github.com/ml-kuleuven/deepstochlog
Abstract: Recent advances in neural symbolic learning, such as DeepProbLog, extend probabilistic logic programs with neural predicates. Like graphical models, these probabilistic logic programs define a probability distribution over possible worlds, for which inference is computationally hard. We propose DeepStochLog, an alternative neural symbolic framework based on stochastic definite clause grammars, a type of stochastic logic program, which defines a probability distribution over possible derivations. More specifically, we introduce neural grammar rules into stochastic definite clause grammars to create a framework that can be trained end-to-end. We show that inference and learning in neural stochastic logic programming scale much better than for neural probabilistic logic programs. Furthermore, the experimental evaluation shows that DeepStochLog achieves state-of-the-art results on challenging neural symbolic learning tasks.
We experiment with Wiener's attack to break RSA when the secret exponent d is short, meaning its bit length is smaller than about one quarter of the bit length of the public modulus. We discuss cryptanalysis details and present demos of the attack. Our very minor extension of Wiener's attack is also discussed.
If we have a 2048-bit RSA configuration but our private exponent d is only about 512 bits, then the above attack breaks RSA in a few seconds.
This work uses continued fractions to derive the private key from the given public key: the private exponent d can be recovered from the convergents of the continued-fraction expansion of e/n, both of which are public values.
In the default settings of standard RSA libraries, this attack and my minor extension are not relevant (to the best of our knowledge). However, if we configure our library to choose a very large public encryption exponent e, then our private decryption exponent d could be short enough to mount an attack.
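The convergent-based recovery described above can be sketched as follows. The toy parameters (p = 101, q = 113, d = 3, hence e = 7467) are illustrative values chosen here so that Wiener's bound d < n^(1/4)/3 holds; they are not taken from the original demo:

```python
# Sketch of Wiener's continued-fraction attack on short-d RSA.
# Toy parameters (illustrative only): p = 101, q = 113, d = 3.
from math import isqrt

def convergents(num, den):
    """Yield the convergents (k, d) of the continued fraction of num/den."""
    a = []
    while den:
        a.append(num // den)
        num, den = den, num % den
    h0, h1, k0, k1 = 0, 1, 1, 0
    for q in a:
        h0, h1 = h1, q * h1 + h0
        k0, k1 = k1, q * k1 + k0
        yield h1, k1

def wiener(e, n):
    """Try each convergent k/d of e/n as a guess for (k, d) in
    e*d = k*phi(n) + 1; confirm by factoring n from the guessed phi."""
    for k, d in convergents(e, n):
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k
        s = n - phi + 1               # p + q, if the guess is right
        disc = s * s - 4 * n          # (p - q)^2, if the guess is right
        if disc >= 0 and isqrt(disc) ** 2 == disc:
            return d                  # private exponent recovered
    return None

n, e = 101 * 113, 7467                # e is the inverse of d = 3 mod phi(n)
assert wiener(e, n) == 3
```

A production-grade attack would additionally verify the recovered d against a known plaintext/ciphertext pair, but the square-discriminant check above already rules out wrong convergents in this sketch.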
The document discusses the RSA encryption algorithm. It begins by explaining how to generate the public and private keys, including choosing two prime numbers p and q, computing phi(n) as (p-1)(q-1), and selecting the public and private exponents e and d. It then explains how RSA encryption and decryption work using these keys. The document also discusses some ways RSA can be broken, such as with a quantum computer using Shor's algorithm to find the prime factors of n through periodicity. It provides examples to illustrate RSA key generation and encryption/decryption.
Homomorphic encryption allows computations to be performed on encrypted data without decrypting it first. This document discusses homomorphic encryption techniques including partially homomorphic encryptions that support either addition or multiplication operations, and fully homomorphic encryption introduced by Craig Gentry that supports both types of operations. It also covers the use of ideal lattices in lattice-based cryptosystems and the bootstrapping technique used to "refresh" ciphertexts and prevent noise from accumulating during homomorphic computations.
The document describes the RSA public-key cryptosystem. It discusses how RSA uses a public key for encryption and a private key for decryption. It also outlines the key steps for RSA, including key generation where two large prime numbers are used to calculate the public and private keys, encryption using the public key, and decryption using the private key. An example is provided to illustrate encrypting and decrypting a message using RSA.
RSA is a public-key cryptography algorithm used for encryption, digital signatures, and key exchange. It uses a public and private key pair based on the difficulty of factoring the product of two large prime numbers. To encrypt a message, it is encrypted with the recipient's public key. To decrypt, the recipient uses their private key. The security of RSA relies on the difficulty of determining the prime factors of a large number.
The document demonstrates breaking a 768-bit RSA encryption by factorizing the public key's modulus into its prime factors. It begins with an overview of RSA and integer factorization, then shows the encryption of a sample plaintext under a 768-bit public key. Finally, it programs and runs the decryption using the pre-computed prime factors of the modulus, successfully recovering the original plaintext in under a second. The document concludes that RSA security relies on the computational difficulty of integer factorization and recommends using key sizes of 1024 bits or more.
Everything I always wanted to know about crypto, but never thought I'd unders...Codemotion
For many years, I had entirely given up on ever understanding anything about cryptography. However, I’ve since learned it’s not nearly as hard as I thought to understand many of the important concepts. In this talk, I’ll take you through some of the underlying principles of modern applications of cryptography. We’ll talk about our goals, the parts involved, and how to prevent and understand common vulnerabilities. This will help you make better choices when you implement crypto in your products, and will improve your understanding of how crypto is applied to things you already use.
Can we reveal the RSA private exponent d from its public key <e, n>? We study this question for two specific cases: e = 3 and e = 65537. Using demos, we verify that RSA reveals the most significant half of the private exponent d when the public exponent e is small. For example, for 2048-bit RSA, the most significant 1024 bits are revealed!
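The leak described above can be checked numerically. Since e·d = k·φ(n) + 1 for some k < e, and φ(n) = n − (p + q − 1), the publicly computable value (k·n + 1)/e differs from d by less than p + q, i.e. by only about √n, which is why roughly the most significant half of d's bits are exposed when e is small. The parameters below (p = 101, q = 113, e = 3) are toy values chosen here for illustration:

```python
# For small e, d_approx = (k*n + 1) // e is close to the private d:
# e*d = k*phi + 1 and phi = n - (p + q - 1), so |d_approx - d| < p + q.
# Toy numbers (illustrative): p = 101, q = 113, e = 3.
p, q, e = 101, 113, 3
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                  # private exponent (Python 3.8+)
k = (e * d - 1) // phi               # here k = 2; in general 1 <= k < e
d_approx = (k * n + 1) // e          # computable from public values alone
assert abs(d_approx - d) < p + q     # error is only ~sqrt(n)
```

At 2048-bit scale the same bound means the top ~1024 bits of d_approx and d agree, matching the claim above (an attacker must still guess k, but there are fewer than e candidates).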
This document provides an overview of the mathematics and algorithms behind the RSA cryptosystem. It discusses the key concepts of RSA, including:
- The use of modular arithmetic and Euler's totient function in defining the group ZN* used for encryption/decryption.
- How the RSA trapdoor permutation allows encryption with a public key and decryption with a private key.
- The generation of public/private key pairs and the security properties provided by the use of large prime numbers.
- Standard algorithms for encrypting and signing messages with RSA.
- Implementation techniques like square-and-multiply to improve computational efficiency.
- Standards and protocols like OAEP and PSS that strengthen RSA against various attacks
The cryptography puzzle discussed here is part of an online challenge. I demonstrate how I broke RSA when randomly generated primes were shared among a set of keys. I discuss basic metrics as well as the implementation and design of my exploit scripts.
RSA and OAEP
Diffie-Hellman Key Exchange and its Security Aspects
Model of Asymmetric Key Cryptography
Factorization and other methods for Public Key Cryptography
The document provides an overview of MPEG-4, a standard that offers both advanced audio and video codecs as well as tools for combining multimedia such as audio, video, graphics and interactivity. It was developed through an open international process to select the best technologies. MPEG-4 codecs like AVC and AAC provide high compression efficiency, having been adopted for HDTV, mobile video, and digital music. Its rich media tools allow interactive experiences combining different media types.
This document provides an overview of Codan's 6700/6900 series block up converter (BUC) systems and components. It describes the BUC, low-noise block converter (LNB), and redundancy systems. It also covers installation, operation, and troubleshooting of the systems. The document contains information on frequency bands, conversion plans, interfaces, cable connections, monitor/control, commands, maintenance procedures, and compliance standards.
This document discusses digital set-top boxes (STBs) and related standards. It covers:
1) The DVB standards for digital TV broadcasting via different transmission media, including DVB-T for terrestrial, DVB-S for satellite, and DVB-C for cable. These share source coding/compression and service multiplexing standards.
2) STBs will be needed until integrated digital TVs are cheaper. Affordable STBs are key for digital TV adoption. Common standards help lower STB costs through economies of scale.
3) "Open architecture" and "interoperability" mean, respectively, that the STB functionality is defined by public standards and that the STB can receive services across networks.
The document discusses DCT/IDCT concepts and applications. It provides an introduction to DCT and IDCT, explaining that they are used widely in video and audio compression. It describes the DCT and IDCT functions and how they work to transform signals between spatial and frequency domains. Examples of one-dimensional and two-dimensional DCT/IDCT equations are also given. Finally, common applications of DCT/IDCT compression techniques are listed, such as in DVD players, cable TV, graphics cards, and medical imaging systems.
This document discusses image compression using the discrete cosine transform (DCT). It develops simple Mathematica functions to compute the 1D and 2D DCT. The 1D DCT transforms a list of real numbers into elementary frequency components. It is computed via matrix multiplication or using the discrete Fourier transform with twiddle factors. The 2D DCT applies the 1D DCT to rows and then columns of an image, making it separable. These functions illustrate how Mathematica can be used to prototype image processing algorithms.
DVB-S2 is the second-generation specification for satellite broadcasting developed by DVB in 2003. It uses more advanced channel coding (LDPC codes) and modulation formats (QPSK, 8PSK, 16APSK, 32APSK) for a 30% increase in transmission capacity over DVB-S. DVB-S2 allows for adaptive coding and modulation to optimize transmission for each user. It is designed for broadcast, interactive, and professional applications with flexibility to handle different transponder characteristics and content formats.
The STi7167 is an integrated system-on-chip that combines a configurable DVB-T or DVB-C demodulator with STB decoding and display functions. It provides advanced HD and SD video decoding, audio decoding, graphics processing, and connectivity options. The chip's integrated features allow for low cost and small size STB designs for cable or terrestrial networks.
This document provides an overview of service information (SI) in digital video broadcasting (DVB) systems, including sections like the network information section (NIT), service description section (SDT), bouquet association section (BAT), program association section (PAT), conditional access section (CAT), transport stream description section (TSDT), event information section (EIT), and running status section (RST). It includes syntax diagrams and details for each section, such as table IDs, section lengths, descriptors, and other fields. It also provides the PID and refresh interval requirements for each table type.
1) The document describes a modification to the Huffman coding used in JPEG image compression. It proposes pairing each non-zero DCT coefficient with the run-length of subsequent (rather than preceding) zero coefficients.
2) This allows using separate optimized Huffman code tables for each DCT coefficient position, improving compression by 10-15% over standard JPEG coding.
3) The decoding procedure is not changed and no end-of-block marker is needed, providing advantages with no increase in complexity.
Dani Pedrosa won the MotoGP race at Laguna Seca, finishing just 0.344 seconds ahead of Valentino Rossi in second and 1.926 seconds ahead of Jorge Lorenzo in third. Casey Stoner finished fourth, over 12 seconds behind Pedrosa. There were several crashes during the race, with Andrea Dovizioso, Sete Gibernau, and Gabor Talmacsi all falling out of contention. James Toseland received a ride through penalty for a jump start.
The document provides implementation guidelines for using the DVB Simulcrypt standard, including describing the architecture and protocols, clarifying differences between protocol versions, explaining state diagrams and behaviors, and providing recommendations for error handling, redundancy management, and custom signaling profiles to facilitate reliable and efficient Simulcrypt headend implementation.
1) The document discusses quantization and pulse code modulation (PCM) in voice signal encoding. PCM assigns 256 possible values to digitally represent analog voice samples, divided into chords and steps on a linear scale.
2) A logarithmic quantization scale is better than a linear one for voice signals, as it allocates more quantization steps to lower amplitudes prevalent in speech. This "compressed encoding" improves fidelity.
3) Quantization error occurs when samples with different amplitudes are assigned the same digital value, distorting the reconstructed waveform. Compression helps maintain a higher signal-to-noise ratio especially for low amplitudes.
This document provides implementation guidelines for the DVB Simulcrypt standard. It describes the architecture and protocols involved in simulcrypt systems, including the ECMG protocol between the security client system and conditional access modules, and the EMMG/PDG protocol between conditional access modules and multiplex equipment. The document outlines differences between version 1 and 2 of the standards, and provides recommendations for compliance. It also includes detailed state diagrams and descriptions of the protocols involved.
The Event Logger monitors and logs Digital Program Insertion (DPI) messages to verify correct transmission of signals via satellite. It watches for configured GPI state changes that indicate an expected DPI message. If the message is received on time, it is logged as a matched event. If not received on time, it is flagged as missed. The Event Logger also decodes DPI messages to help diagnose issues, and is compatible with various encoding systems. It has 6 ASI inputs, 108 GPI sensors, and logs data in real-time and for archiving.
This document discusses the basics of BISS scrambling. It describes BISS mode 1, which uses a session word, and BISS mode E, which encrypts the session word using an identifier and encryption algorithm. BISS mode E provides an additional layer of protection for transmitting the session word. The document also covers calculating the encrypted session word, using buried and injected identifiers, and how to operate scramblers in the different BISS modes.
1) Reed-Solomon codes are a type of error-correcting code invented in 1960 that can detect and correct multiple symbol errors. They work by encoding data into redundant symbols that can be used to detect and locate errors.
2) Reed-Solomon codes are particularly good at correcting burst errors, where a block of symbols are corrupted together by noise. Even if an entire block of bits is corrupted, the code can still correct the errors by replacing the corrupted symbol.
3) The error correction capability of Reed-Solomon codes increases with larger block sizes, as noise is averaged over more symbols. However, implementing Reed-Solomon codes also becomes more complex with higher redundancy.
This document describes the head-end architecture and synchronization for digital video broadcasting using SimulCrypt. It outlines the system components including an event information scheduler, SimulCrypt synchronizer, entitlement control message generator, entitlement management message generator, and multiplexer. It also describes the interfaces between these components, covering processes like channel and stream establishment and closure, as well as bandwidth allocation and status reporting.
This document provides the European standard for the frame structure, channel coding and modulation for a second generation digital transmission system for cable systems (DVB-C2). It defines the system architecture and specifications for input processing, bit-interleaved coding and modulation, data slice packet generation, layer 1 part 2 signalling, frame building, and OFDM generation. The standard aims to provide improved performance for cable systems over the existing DVB-C standard.
This document discusses Euler's formula, which relates the number of vertices (V), edges (E), and faces (P) of a polyhedron. Through experimenting with attaching polygons and bending shapes, students derive the formula V - E + P = 2 for polyhedra. Removing a face shows the formula still holds, revealing why it is true for any polyhedron. Students learn the formula can distinguish polyhedra from other 3D shapes by calculating the Euler characteristic V - E + P.
The document discusses video compression algorithms used in standards like MPEG, explaining how video compression works through motion estimation, discrete cosine transformation, quantization, and entropy coding to reduce file sizes. It analyzes the tradeoff between compression ratio and quality, and provides details on common video compression standards and their applications. The MPEG standards are described in particular detail, outlining the different frame types and compression steps used to remove spatial and temporal redundancies from video for more efficient storage and transmission.
RSA Cryptosystem (6/8/2002 2:20 PM)

Outline
- Euler’s theorem (§10.1.3)
- RSA cryptosystem (§10.2.3)
  - Definition
  - Example
  - Security
  - Correctness
- Algorithms for RSA
  - Modular power (§10.1.4)
  - Modular inverse (§10.1.5)
  - Randomized primality testing (§10.1.6)
Euler’s Theorem

- The multiplicative group for Zn, denoted with Z*n, is the subset of elements of Zn relatively prime with n.
- The totient function of n, denoted with φ(n), is the size of Z*n.
- Example: Z*10 = { 1, 3, 7, 9 }, so φ(10) = 4.
- If p is prime, we have Z*p = {1, 2, …, p − 1} and φ(p) = p − 1.
- Euler’s Theorem: for each element x of Z*n, we have x^φ(n) mod n = 1.
- Example (n = 10):
  3^φ(10) mod 10 = 3^4 mod 10 = 81 mod 10 = 1
  7^φ(10) mod 10 = 7^4 mod 10 = 2401 mod 10 = 1
  9^φ(10) mod 10 = 9^4 mod 10 = 6561 mod 10 = 1

RSA Cryptosystem

Setup:
- n = pq, with p and q primes
- e relatively prime to φ(n) = (p − 1)(q − 1)
- d inverse of e in Zφ(n)

Keys:
- Public key: KE = (n, e)
- Private key: KD = d

Encryption: plaintext M in Zn; ciphertext C = M^e mod n.
Decryption: M = C^d mod n.

Example:
- Setup: p = 7, q = 17, n = 7⋅17 = 119, φ(n) = 6⋅16 = 96, e = 5, d = 77
- Keys: public key (119, 5), private key 77
- Encryption: M = 19, C = 19^5 mod 119 = 66
- Decryption: M = 66^77 mod 119 = 19
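The setup, encryption, and decryption steps above can be sketched in a few lines of Python (3.8+ for the modular-inverse form of pow). This is a toy illustration using the slide's numbers, not a secure implementation; real RSA uses large random primes and padding:

```python
# Toy RSA with the slide's numbers (p = 7, q = 17); illustration only.
p, q = 7, 17
n = p * q                  # 119
phi = (p - 1) * (q - 1)    # 96
e = 5                      # chosen relatively prime to phi(n)
d = pow(e, -1, phi)        # modular inverse (Python 3.8+); d = 77

M = 19                     # plaintext in Zn
C = pow(M, e, n)           # encryption: 19^5 mod 119 = 66
assert C == 66
assert pow(C, d, n) == M   # decryption recovers M
```

Note that pow(e, -1, phi) raises ValueError when e is not relatively prime to φ(n), which matches the setup requirement on e.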
Complete RSA Example

Setup:
- p = 5, q = 11, n = 5⋅11 = 55
- φ(n) = 4⋅10 = 40
- e = 3, d = 27 (3⋅27 = 81 = 2⋅40 + 1)

Encryption: C = M^3 mod 55
Decryption: M = C^27 mod 55

M  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18
C  1  8 27  9 15 51 13 17 14 10 11 23 52 49 20 26 18  2

M 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
C 39 25 21 33 12 19  5 31 48  7 24 50 36 43 22 34 30 16

M 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
C 53 37 29 35  6  3 32 44 45 41 38 42  4 40 46 28 47 54

Security

- The security of the RSA cryptosystem is based on the widely believed difficulty of factoring large numbers.
- The best known factoring algorithm (the general number field sieve) runs in time exponential in roughly the cube root of the number of bits of the number to be factored.
- In 1999, a 512-bit number was factored in 4 months using the following computers: 160 175-400 MHz SGI and Sun, 8 250 MHz SGI Origin, 120 300-450 MHz Pentium II, and 4 500 MHz Digital/Compaq.
- The RSA challenge, sponsored by RSA Security, offers cash prizes for the factorization of given large numbers. In April 2002, prizes ranged from $10,000 (576 bits) to $200,000 (2048 bits).

Estimated resources needed to factor a number within one year:

Bits   PCs        Memory
430    1          128MB
760    215,000    4GB
1,020  342×10^6   170GB
1,620  1.6×10^15  120TB
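The Complete RSA Example can be verified mechanically: encrypting every M in Z55 and decrypting with d = 27 must round-trip, and individual ciphertexts must match the table. A minimal check in Python:

```python
# Verify the Complete RSA Example: p = 5, q = 11, n = 55, e = 3, d = 27.
n, e, d = 55, 3, 27

# Decryption must invert encryption for every plaintext M in Zn.
for M in range(n):
    C = pow(M, e, n)
    assert pow(C, d, n) == M

# Spot-check ciphertexts against the table above.
assert pow(2, e, n) == 8
assert pow(19, e, n) == 39
assert pow(54, e, n) == 54
```

The round-trip holds even for M with gcd(M, n) > 1 (e.g., M = 5), because n = 5⋅11 is squarefree; this is the stronger correctness case the slides defer to the book.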
Correctness

- We show the correctness of the RSA cryptosystem for the case when the plaintext M is relatively prime with n.
- Namely, we show that (M^e)^d mod n = M.
- Since ed mod φ(n) = 1, there is an integer k such that ed = kφ(n) + 1.
- Since M is relatively prime with n, by Euler’s theorem we have M^φ(n) mod n = 1.
- Thus, we obtain:
  (M^e)^d mod n
  = M^ed mod n
  = M^(kφ(n) + 1) mod n
  = M ⋅ M^kφ(n) mod n
  = M ⋅ (M^φ(n))^k mod n
  = M ⋅ (M^φ(n) mod n)^k mod n
  = M ⋅ 1^k mod n
  = M mod n
  = M
- See the book for the proof of correctness in the case when the plaintext M is not relatively prime with n.

Algorithmic Issues

The implementation of the RSA cryptosystem requires various algorithms.

- Overall: representation of integers of arbitrarily large size, and arithmetic operations on them.
- Setup:
  - Generation of random numbers with a given number of bits (to generate candidates p and q)
  - Primality testing (to check that candidates p and q are prime)
  - Computation of the GCD (to verify that e and φ(n) are relatively prime)
  - Computation of the multiplicative inverse (to compute d from e)
- Encryption: modular power
- Decryption: modular power
Modular Power

- The repeated squaring algorithm speeds up the computation of a modular power a^p mod n.
- Write the exponent p in binary: p = p_(b−1) p_(b−2) … p_1 p_0.
- Start with Q_1 = a^p_(b−1) mod n.
- Repeatedly compute Q_i = ((Q_(i−1))^2 mod n) ⋅ a^p_(b−i) mod n.
- We obtain Q_b = a^p mod n.
- The repeated squaring algorithm performs O(log p) arithmetic operations.

Example: 3^18 mod 19 (18 = 10010 in binary)
  Q_1 = 3^1 mod 19 = 3
  Q_2 = (3^2 mod 19) ⋅ 3^0 mod 19 = 9
  Q_3 = (9^2 mod 19) ⋅ 3^0 mod 19 = 81 mod 19 = 5
  Q_4 = (5^2 mod 19) ⋅ 3^1 mod 19 = (25 mod 19) ⋅ 3 mod 19 = 18 mod 19 = 18
  Q_5 = (18^2 mod 19) ⋅ 3^0 mod 19 = 324 mod 19 = (17 ⋅ 19 + 1) mod 19 = 1

  p_(5−i)     1   0   0   1   0
  a^p_(5−i)   3   1   1   3   1
  Q_i         3   9   5  18   1

Modular Inverse

- Theorem: given positive integers a and b, let d be the smallest positive integer such that d = ia + jb for some integers i and j. Then d = gcd(a, b).
- Example: a = 21, b = 15; d = 3, with i = 3, j = −4: 3 = 3 ⋅ 21 + (−4) ⋅ 15 = 63 − 60 = 3.
- Given positive integers a and b, the extended Euclid’s algorithm computes a triplet (d, i, j) such that d = gcd(a, b) and d = ia + jb.
- To test the existence of, and compute, the inverse of x ∈ Zn, we execute the extended Euclid’s algorithm on the input pair (x, n). Let (d, i, j) be the triplet returned, so that d = ix + jn:
  - Case 1: d = 1: i is the inverse of x in Zn.
  - Case 2: d > 1: x has no inverse in Zn.
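Both algorithms are short to express in Python; this sketch mirrors the slides (in practice, Python's built-in pow(a, p, n) and pow(x, -1, n) cover both). One caveat: Bézout coefficients are not unique, so extended Euclid need not return the slide's particular pair (i, j) = (3, −4) for (21, 15); only the identity d = ia + jb is guaranteed:

```python
# Repeated squaring and extended Euclid, following the slides.

def mod_power(a, p, n):
    """Compute a^p mod n by repeated squaring, scanning p's bits
    from most to least significant (O(log p) multiplications)."""
    q = 1
    for bit in bin(p)[2:]:            # p in binary: p_(b-1) ... p_0
        q = (q * q) % n               # square
        if bit == '1':
            q = (q * a) % n           # multiply when the bit is 1
    return q

def ext_euclid(a, b):
    """Return (d, i, j) with d = gcd(a, b) = i*a + j*b."""
    if b == 0:
        return (a, 1, 0)
    d, i, j = ext_euclid(b, a % b)
    return (d, j, i - (a // b) * j)

def mod_inverse(x, n):
    """Inverse of x in Zn if it exists (Case 1: d = 1), else None."""
    d, i, _ = ext_euclid(x, n)
    return i % n if d == 1 else None  # Case 2: d > 1, no inverse

assert mod_power(3, 18, 19) == 1          # the slide's example
d, i, j = ext_euclid(21, 15)
assert d == 3 and i * 21 + j * 15 == 3    # a valid Bezout identity
assert mod_inverse(5, 96) == 77           # d for the RSA example (e = 5)
assert mod_inverse(4, 96) is None         # gcd(4, 96) = 4 > 1
```

The recursion depth of ext_euclid is O(log min(a, b)), so it is safe for cryptographic operand sizes despite Python's recursion limit.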
Pseudoprimality Testing

- The number of primes less than or equal to n is about n / ln n. Thus, we expect to find a prime among O(b) randomly generated numbers with b bits each.
- Testing whether a number is prime (primality testing) is believed to be a hard problem.
- An integer n ≥ 2 is said to be a base-x pseudoprime if x^(n − 1) mod n = 1 (Fermat’s little theorem).
- Composite base-x pseudoprimes are rare:
  - A random 100-bit integer is a composite base-2 pseudoprime with probability less than 10^-13.
  - The smallest composite base-2 pseudoprime is 341.
- Base-x pseudoprimality testing for an integer n: check whether x^(n − 1) mod n = 1. This can be performed efficiently with the repeated squaring algorithm.
- A variation of base-x pseudoprimality provides a suitable compositeness witness function for randomized primality testing (the Rabin-Miller algorithm).

Randomized Primality Testing

- A compositeness witness function witness(x, n) with error probability q, for a random variable x, satisfies:
  - Case 1: n is prime: witness(x, n) = false.
  - Case 2: n is composite: witness(x, n) = false with probability q < 1.
- Algorithm RandPrimeTest tests whether n is prime by repeatedly evaluating witness(x, n).

Algorithm RandPrimeTest(n, k):
  Input: integer n, confidence parameter k, and compositeness witness function witness(x, n) with error probability q
  Output: an indication of whether n is composite (always correct) or prime (incorrect with probability at most 2^-k)
  t ← k / log2(1/q)
  for i ← 1 to t do
    x ← random()
    if witness(x, n) = true then
      return “n is composite”
  return “n is prime”
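The pseudocode above can be sketched in Python. The Rabin-Miller witness shown here is the standard strong-pseudoprime variation the slides allude to (error probability q ≤ 1/4 per random base, so this sketch simply uses t = k rounds, which gives error at most 4^-k ≤ 2^-k). The case n = 341 shows why it is preferred over the plain Fermat test:

```python
import random

def fermat_test(x, n):
    """Base-x pseudoprimality test: n passes if x^(n-1) mod n = 1."""
    return pow(x, n - 1, n) == 1

def mr_witness(x, n):
    """Rabin-Miller compositeness witness function: True only when x
    proves n composite. Never errs on primes; for composite n, a
    random x fails to witness with probability at most 1/4."""
    s, t = 0, n - 1
    while t % 2 == 0:                 # write n - 1 = 2^s * t, t odd
        s, t = s + 1, t // 2
    y = pow(x, t, n)
    if y == 1 or y == n - 1:
        return False
    for _ in range(s - 1):
        y = (y * y) % n
        if y == n - 1:
            return False
    return True                       # x witnesses that n is composite

def rand_prime_test(n, k):
    """RandPrimeTest from the slide, with t = k rounds."""
    for _ in range(k):
        x = random.randrange(2, n - 1)
        if mr_witness(x, n):
            return "n is composite"
    return "n is prime"

# 341 = 11 * 31 is the smallest composite base-2 pseudoprime:
assert fermat_test(2, 341)            # it fools the Fermat test...
assert mr_witness(2, 341)             # ...but not the Rabin-Miller witness
assert rand_prime_test(97, 20) == "n is prime"
```

Because mr_witness never errs on primes, the "n is prime" assertion for 97 is deterministic; only the composite case is probabilistic.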